Characterizing emergent representations in a space of candidate learning rules for deep networks
How are sensory representations learned via experience? Deep learning offers a theoretical toolkit for studying how neural codes emerge under different learning rules. Studies suggesting that representations in deep networks resemble those in biological brains have mostly relied on one specific learning rule: gradient descent, the workhorse behind modern deep learning. However, it remains unclear how robust these emergent representations in deep networks are to this specific choice of learning algorithm. Here we present a continuous two-dimensional space of candidate learning rules, parameterized by levels of top-down feedback and Hebbian learning.
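As an illustration of such a parameterization, here is a minimal Python sketch, not the paper's actual implementation, of a one-hidden-layer linear network whose weight update additively mixes a top-down error-feedback term with a Hebbian term; the parameter names `lam` and `gamma` and the exact additive form are our own assumptions:

```python
import numpy as np

def update(W1, W2, x, y, lam=1.0, gamma=0.0, lr=0.01):
    """One weight update in a one-hidden-layer linear network.

    lam   scales the top-down (error-feedback) component,
    gamma scales the Hebbian (activity-driven) component.
    This parameterization is a hypothetical sketch, not the
    paper's definitive rule.
    """
    h = W1 @ x                 # hidden-layer activity
    y_hat = W2 @ h             # network output
    err = y - y_hat            # top-down error signal

    # Each layer's update: error-driven term + Hebbian term.
    dW2 = lr * (lam * np.outer(err, h) + gamma * np.outer(y_hat, h))
    dW1 = lr * (lam * np.outer(W2.T @ err, x) + gamma * np.outer(h, x))
    return W1 + dW1, W2 + dW2
```

Setting `(lam, gamma) = (1, 0)` recovers plain gradient descent on the squared error, while increasing `gamma` adds a purely local, activity-driven Hebbian component.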
Supplementary Material for Characterizing emergent representations in a space of candidate learning rules for deep networks
We apply singular value decomposition (SVD) to the dataset's input-output correlation matrix to extract the components of the input-output mapping at different hierarchical levels, and use these components to compute the strength of a network's input-output mapping for each hierarchical distinction. In the main paper, inputs are assumed to be one-hot vectors (resembling grandmother-cell codes: each object is represented by a single dedicated neuron), so it seems critical to demonstrate that our framework is robust to a modification of this assumption about input structure. Here, we show that the conclusions presented in the main paper remain unchanged even if we relax the one-hot assumption: the differences in learning dynamics across learning rules within the 2D space persist when we move from this localist code to a distributed input code (Supp.
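A minimal numpy sketch of this SVD analysis under the one-hot (localist) input assumption; the toy dataset shapes and the stand-in learned mapping `W` are our own illustrations, not the paper's data:

```python
import numpy as np

# Hypothetical toy dataset: 8 objects (one-hot inputs) with
# binary feature labels.
rng = np.random.default_rng(0)
n_items, n_features = 8, 15
X = np.eye(n_items)                          # one-hot ("grandmother-cell") inputs
Y = rng.integers(0, 2, (n_items, n_features)).astype(float)  # item-feature labels

# Input-output correlation matrix and its SVD; each singular mode
# corresponds to one hierarchical distinction in the dataset.
Sigma_yx = Y.T @ X / n_items                 # shape (n_features, n_items)
U, s, Vt = np.linalg.svd(Sigma_yx, full_matrices=False)

# Strength of a network's input-output mapping W along each mode:
# project W onto the dataset's singular dimensions.
W = U @ np.diag(s) @ Vt                      # stand-in for a learned mapping
mode_strengths = np.diag(U.T @ W @ Vt.T)
print(np.allclose(mode_strengths, s))        # True for this stand-in
```

Each diagonal entry of `U.T @ W @ Vt.T` measures how strongly the mapping expresses the corresponding hierarchical mode; for a trained network, `W` would be the network's end-to-end weight matrix rather than this stand-in.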
Review for NeurIPS paper: Characterizing emergent representations in a space of candidate learning rules for deep networks
Weaknesses: The exploration of learning rules with only two variable parameters is too restrictive. Numerous learning rules have already been proposed in the field that do not fit into the proposed scheme. Even with two parameters characterizing feedback and Hebbian learning, a single additive mixture is only one possibility. For example, a recent development is the study of tri-factor learning rules [1-3]. There are also learning rules that deal with special cases such as the hierarchical patterns this paper studies; a family of (static) learning rules was proposed many years ago [4].
Review for NeurIPS paper: Characterizing emergent representations in a space of candidate learning rules for deep networks
This paper asks a set of well-motivated questions about how biological brains learn sensory representations through experience, and proposes a continuous two-dimensional space of candidate learning rules, parameterized by levels of top-down feedback and Hebbian learning. The authors first show that this space contains five important candidate learning algorithms as specific points, such as gradient descent and contrastive Hebbian learning. They then analyze the learning dynamics of these rules in a linear network with one hidden layer, trained to learn a hierarchy of concepts following previous work, and identify zones where deep networks exhibit qualitative signatures of biological learning. The work offers an interesting way to parameterize learning rules and tackles the well-motivated problem of characterizing which learning rule is implemented in the biological brain. However, the model used in the paper is overly simple.